50 research outputs found

    I see what you did there: understanding when to trust a ML model with NOVA

    In this demo paper we present NOVA, a machine learning and explanation interface that focuses on the automated analysis of social interactions. NOVA combines Cooperative Machine Learning (CML) and explainable AI (XAI) methods to reduce manual labelling effort while giving users an intuitive understanding of how the classification system learns. To this end, NOVA features a semi-automated labelling process in which users receive immediate visual feedback on the model's predictions, offering insight into the strengths and weaknesses of the underlying classification system. Following an interactive and exploratory workflow, the performance of the model can then be improved by manually revising its predictions.
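
    The abstract describes a cooperative labelling loop: train on the labels gathered so far, predict the unlabelled remainder, and route only low-confidence predictions back to the user for manual revision. Below is a minimal sketch of one such round, assuming a scikit-learn-style classifier; the function name and the confidence threshold are illustrative and are not taken from the NOVA tool itself.

        # Minimal sketch of one cooperative machine learning (CML) round,
        # assuming a scikit-learn-style classifier. Names and the threshold
        # are illustrative assumptions, not NOVA internals.
        from sklearn.ensemble import RandomForestClassifier

        def cooperative_labelling_round(X_labelled, y_labelled, X_pool,
                                        confidence_threshold=0.8):
            """Train, predict the unlabelled pool, and split the pool into
            auto-accepted predictions and cases flagged for manual revision."""
            model = RandomForestClassifier(n_estimators=100)
            model.fit(X_labelled, y_labelled)

            proba = model.predict_proba(X_pool)            # per-class confidences
            predictions = model.classes_[proba.argmax(axis=1)]
            confidence = proba.max(axis=1)

            accepted = confidence >= confidence_threshold  # kept automatically
            to_review = ~accepted                          # shown to the user
            return predictions, accepted, to_review

    Each revision round grows the labelled set, so both the classifier and the user's understanding of it can improve iteratively.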

    On the potential of modular voice conversion for virtual agents

    Socially-aware personality adaptation

    Interpreting Psychophysiological States Using Unobtrusive Wearable Sensors in Virtual Reality

    One of the main challenges in the study of human behavior is to quantitatively assess participants' affective states by measuring their psychophysiological signals under ecologically valid conditions. The quality of the acquired data is often poor because of artifacts generated by natural interactions such as full-body movements and gestures. We created a technology to address this problem: we enhanced the eXperience Induction Machine (XIM), an immersive space we built to conduct experiments on human behavior, with unobtrusive wearable sensors that measure electrocardiogram, breathing rate and electrodermal response. In an empirical validation, participants wearing these sensors were free to move in the XIM space while exposed to a series of visual stimuli taken from the International Affective Picture System (IAPS). Our main result is a quantitative estimation of the arousal range of the affective stimuli through the analysis of participants' psychophysiological states. Taken together, our findings show that the XIM constitutes a novel tool for studying human behavior in life-like conditions.
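
    As a concrete illustration of how such signals can feed an arousal estimate, the sketch below extracts two standard arousal-related features from an electrodermal activity trace. The feature choices and the peak threshold are assumptions for illustration; the paper's exact analysis pipeline is not reproduced here.

        # Illustrative sketch: two common arousal-related features from an
        # electrodermal activity (EDA) trace sampled at fs Hz. Threshold and
        # feature choices are assumptions, not the paper's actual pipeline.
        import numpy as np
        from scipy.signal import find_peaks

        def eda_arousal_features(eda, fs):
            """Return tonic skin conductance level and phasic response rate."""
            eda = np.asarray(eda, dtype=float)
            tonic_level = float(eda.mean())    # overall conductance level
            # Phasic responses: peaks rising at least 0.05 microsiemens.
            peaks, _ = find_peaks(eda, prominence=0.05)
            duration_min = len(eda) / fs / 60.0
            return tonic_level, len(peaks) / duration_min  # responses per minute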

    Laugh machine

    The Laugh Machine project aims at endowing virtual agents with the capability to laugh naturally, at the right moment and with the correct intensity, when interacting with human participants. In this report we present the technical development and evaluation of such an agent in one specific scenario: watching TV along with a participant. The agent must be able to react to both the video and the participant's behaviour. A full processing chain has been implemented, integrating components to sense the human's behaviour, decide when and how to laugh, and finally synthesize audiovisual laughter animations. The system was evaluated on its capability to enhance the affective experience of naive participants, with the help of pre- and post-experiment questionnaires. Three interaction conditions were compared: laughter-enabled or not, and reacting to the participant's behaviour or not. Preliminary results (the number of experiments is currently too small to obtain statistically significant differences) show that the interactive, laughter-enabled agent is positively perceived and increases the emotional dimension of the experiment.
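
    The processing chain described above decomposes naturally into three stages. The sketch below shows that sense/decide/synthesize structure with stand-in stubs; none of the component logic comes from the Laugh Machine system itself.

        # Structural sketch of the sense -> decide -> synthesize chain.
        # All three components are stand-in stubs, not Laugh Machine code.
        import random

        def sense(frame):
            """Stub: extract participant cues (e.g. laughter) from sensor data."""
            return {"participant_laughing": random.random() < 0.1}

        def decide(cues, video_event):
            """Stub policy: laugh at punchlines or when the participant laughs."""
            if video_event == "punchline" or cues["participant_laughing"]:
                return {"laugh": True, "intensity": 0.7}
            return {"laugh": False, "intensity": 0.0}

        def synthesize(decision):
            """Stub: trigger an audiovisual laughter animation."""
            if decision["laugh"]:
                print(f"agent laughs (intensity={decision['intensity']:.1f})")

        for frame, video_event in enumerate(["scene", "punchline", "scene"]):
            synthesize(decide(sense(frame), video_event))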

    From Synchronous to Asynchronous Event-driven Fusion Approaches in Multi-modal Affect Recognition

    The cues that describe emotional conditions are encoded within multiple modalities, and fusing multi-modal information is a natural way to improve the automated recognition of emotions. Many studies use traditional fusion approaches in which decisions are synchronously forced for fixed time segments across all considered modalities and generic combination rules are applied. Varying success is reported; sometimes performance is worse than uni-modal classification. Starting from these premises, this thesis investigates and compares the performance of various synchronous fusion techniques. We enrich the traditional set with custom, emotion-adapted fusion algorithms that are tailored to the affect recognition domain in which they are used. These developments enhance recognition quality to a certain degree, but do not solve the performance problems that sometimes occur. To isolate the issue, we conduct a systematic investigation of synchronous fusion techniques on acted and natural data and conclude that the synchronous approach has a crucial weakness, especially for non-acted emotions: the implicit assumption that relevant affective cues happen at the same time across all modalities holds only if emotions are expressed very coherently and clearly, which we cannot expect in a natural setting. This calls for a switch to asynchronous fusion approaches. Such a switch can be realized with classification models that have memory capabilities (e.g. recurrent neural networks), but these are often data-hungry and non-transparent. We therefore present an alternative approach to asynchronous modality treatment: the event-driven fusion strategy, in which modalities decide when to contribute information to the fusion process in the form of affective events. These events introduce an additional abstraction layer into the recognition process, as the provided events need not match the sought target class but can be cues that indicate the final assessment. Furthermore, the architecture of an event-driven fusion system is well suited for real-time usage and very tolerant of temporarily missing input from single modalities, making it a good choice for affect recognition in the wild. We demonstrate these capabilities in various comparison and prototype studies and present the application of event-driven fusion strategies in multiple European research projects.
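
    To make the event-driven idea concrete, the sketch below lets each modality push timestamped affective events whenever it has something to report, while the fusion component combines whatever evidence is currently available, discounting older events. The exponential decay and the event format are illustrative assumptions, not the thesis's exact formulation.

        # Hedged sketch of event-driven fusion: modalities push timestamped
        # affective events asynchronously; fusion weights them by recency.
        # Decay scheme and event format are illustrative assumptions.
        import time

        class EventFusion:
            def __init__(self, half_life=2.0):
                self.half_life = half_life  # seconds until an event's weight halves
                self.events = []            # list of (timestamp, cue, confidence)

            def push(self, cue, confidence):
                """Called by any modality whenever it detects an affective cue."""
                self.events.append((time.monotonic(), cue, confidence))

            def assess(self):
                """Fuse all events into per-cue scores; naturally tolerant of
                modalities that are temporarily silent."""
                now = time.monotonic()
                scores = {}
                for ts, cue, conf in self.events:
                    weight = 0.5 ** ((now - ts) / self.half_life)  # recency decay
                    scores[cue] = scores.get(cue, 0.0) + conf * weight
                return scores

        fusion = EventFusion()
        fusion.push("smile", 0.9)          # e.g. from the video channel
        fusion.push("raised_pitch", 0.6)   # e.g. from the audio channel
        print(fusion.assess())

    Because `assess` simply sums whatever events exist, a modality that drops out for a while degrades the estimate gracefully instead of stalling the pipeline.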